Image quality assessment (IQA) forms a natural and often straightforward undertaking for humans, yet effective automation of the task remains highly challenging. Recent metrics from the deep learning community commonly compare image pairs during training to improve upon traditional metrics such as PSNR or SSIM. However, current comparisons ignore the fact that image content affects quality assessment, since comparisons occur only between images of similar content. This restricts the diversity and number of image pairs that the model is exposed to during training. In this paper, we strive to enrich these comparisons with content diversity. Firstly, we relax the comparison constraints and compare pairs of images with differing content. This increases the variety of available comparisons. Secondly, we introduce listwise comparisons to provide a holistic view to the model. By including differentiable regularizers, derived from correlation coefficients, models can better adjust predicted scores relative to one another. Evaluation on multiple benchmarks, covering a wide range of distortions and image content, shows the effectiveness of our learning scheme for training image quality assessment models.
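As a minimal sketch of one such differentiable, correlation-derived regularizer, the snippet below penalizes low Pearson correlation (PLCC) between predicted scores and mean opinion scores over a listwise batch; the paper's exact regularizers and loss weighting are not reproduced here, and the loss names in the usage comment are hypothetical.

```python
# Sketch (PyTorch) of a differentiable correlation-based regularizer for
# listwise IQA training; the batch may mix images with differing content,
# so the penalty only constrains predicted scores relative to one another.
import torch

def plcc_regularizer(pred: torch.Tensor, mos: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Return 1 - Pearson correlation between predicted scores and mean opinion scores."""
    pred_c = pred - pred.mean()
    mos_c = mos - mos.mean()
    plcc = (pred_c * mos_c).sum() / (pred_c.norm() * mos_c.norm() + eps)
    return 1.0 - plcc

# Hypothetical use inside a training step:
# loss = pairwise_loss + reg_weight * plcc_regularizer(predicted_scores, mos_scores)
```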
Density-based out-of-distribution (OOD) detection has recently been shown to be unreliable for the task of detecting OOD images. Various density-ratio based approaches achieve good empirical performance, but the methods typically lack a principled probabilistic-modelling explanation. In this work, we propose to unify density-ratio based methods under a novel framework that builds energy-based models and employs differing base distributions. Under our framework, the density ratio can be viewed as the unnormalized density of an implicit semantic distribution. Furthermore, we propose to directly estimate the density ratio of a data sample via class ratio estimation. Compared with recent works that require training deep generative models for the task, we report competitive results on the OOD image problem. Our approach enables a simple yet effective path towards solving the OOD detection problem.
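As a rough illustration of the density-ratio view (not the paper's class-ratio estimator), the classic classification-based density-ratio trick can be sketched as follows: a binary classifier separating in-distribution data from a background distribution produces a logit approximating the log density ratio, which can then be used as an OOD score. The network and feature dimensions below are assumptions.

```python
# Sketch of OOD scoring with an estimated density ratio; the binary-classifier
# estimator here is the standard density-ratio trick, shown only for illustration.
import torch
import torch.nn as nn

class RatioHead(nn.Module):
    def __init__(self, feature_dim: int = 512):
        super().__init__()
        self.linear = nn.Linear(feature_dim, 1)  # logit ~ log p_in(x) - log p_bg(x)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.linear(feats).squeeze(-1)

def ood_score(head: RatioHead, feats: torch.Tensor) -> torch.Tensor:
    # A low estimated log-ratio (the sample is better explained by the
    # background distribution) corresponds to a high OOD score.
    return -head(feats)
```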
Digital cameras transform sensor raw readings into RGB images by means of an image signal processor (ISP). Computational photography tasks such as image denoising and colour constancy are commonly performed in the raw domain, partly due to the inherent hardware design, but also because of the appealing noise statistics that result from direct sensor readings. Despite this, the availability of raw images is limited in comparison with the abundance and diversity of available RGB data. Recent approaches have attempted to bridge this gap by estimating the RGB-to-raw mapping: handcrafted model-based methods, which usually require manual parameter fine-tuning, and end-to-end learnable neural networks, which require large amounts of training data, at times with complex training procedures, and which generally lack interpretability and parametric control. To address these existing limitations, we present a novel hybrid model-based and data-driven ISP that builds on canonical ISP operations and is both learnable and interpretable. Our proposed invertible model, capable of bidirectional mapping between the raw and RGB domains, employs end-to-end learning of rich parameter representations, i.e. dictionaries, that are free of direct parametric supervision and additionally enable simple and plausible data augmentation. We demonstrate the value of our data generation process on the tasks of raw image reconstruction and raw image denoising, attaining state-of-the-art performance in both. Furthermore, we show that our ISP can learn meaningful mappings from a few data samples, and that models trained with our dictionary-based data augmentation are competitive despite having few or zero ground-truth labels.
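For orientation, the canonical ISP operations the abstract builds on can be sketched as a chain of approximately invertible stages (white balance, colour correction, gamma); the learnable-dictionary parameterisation is omitted, and all parameter values below are illustrative assumptions.

```python
# Sketch of a simplified, approximately invertible ISP pipeline (NumPy).
import numpy as np

def isp_forward(raw: np.ndarray, wb_gains: np.ndarray, ccm: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """raw: HxWx3 demosaiced sensor image in [0, 1] -> display RGB in [0, 1]."""
    x = raw * wb_gains                # per-channel white balance
    x = np.clip(x @ ccm.T, 0.0, 1.0)  # 3x3 colour correction matrix
    return x ** (1.0 / gamma)         # gamma / tone curve

def isp_inverse(rgb: np.ndarray, wb_gains: np.ndarray, ccm: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Approximate RGB -> raw mapping obtained by inverting each stage in reverse order
    (clipping in the forward pass makes the round trip approximate)."""
    x = rgb ** gamma
    x = x @ np.linalg.inv(ccm).T
    return x / wb_gains
```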
In this work, we introduce a novel strategy for long-tailed recognition that addresses the few-shot problem of the tail classes via training-free knowledge transfer. Our objective is to transfer knowledge acquired from information-rich common classes to semantically similar, yet data-hungry, rare classes in order to obtain stronger tail-class representations. We leverage class prototypes and learned cosine classifiers, which provide two distinct, complementary representations of class cluster centres in feature space, and use an attention mechanism to select and recombine learned classifier features from common classes to obtain higher-quality rare-class representations. Our knowledge transfer process is training-free, reduces the risk of overfitting, and is able to provide continual classifier updates for new classes. Experiments show that our approach can attain significant performance gains on rare classes while maintaining robust common-class performance, outperforming directly comparable state-of-the-art models.
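A minimal sketch of the training-free transfer idea, assuming a simple softmax attention from a rare-class prototype over the common-class cosine-classifier weights; the paper's exact attention and fusion scheme are not reproduced, and the temperature value is an assumption.

```python
# Sketch (PyTorch): enrich a rare-class representation with attention over
# common-class classifier weights, without any additional training.
import torch
import torch.nn.functional as F

def enrich_rare_classifier(rare_prototype: torch.Tensor,
                           common_classifiers: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """rare_prototype: (d,), common_classifiers: (C_common, d); both L2-normalised."""
    attn = F.softmax(rare_prototype @ common_classifiers.T / temperature, dim=-1)
    transferred = attn @ common_classifiers           # convex combination of common-class weights
    enriched = F.normalize(rare_prototype + transferred, dim=-1)
    return enriched  # used as the rare class's cosine-classifier weight
```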
Out-of-distribution (OOD) detection and lossless compression constitute two problems that can be addressed by training a probabilistic model on a first dataset, followed by likelihood evaluation on a second dataset where the data distribution differs. By defining the generalization of probabilistic models in terms of likelihood, we show that, in the case of image models, the generalization capability is dominated by local features. This motivates our proposal of a local autoregressive model that exclusively models local image features and thereby achieves improved performance. We apply the proposed model to the OOD detection task and achieve state-of-the-art unsupervised OOD detection performance without introducing additional data. Furthermore, we use our model to build a new lossless image compressor, NeLLoc (Neural Local Lossless Compressor), and report state-of-the-art compression rate and model size.
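As a hedged illustration of the shared likelihood machinery, the sketch below computes bits per dimension under an autoregressive image model; this quantity serves both as an (inverted) OOD score and as the ideal lossless code length. The `model` interface is a placeholder, not NeLLoc's actual implementation.

```python
# Sketch: negative log-likelihood in bits/dim from an autoregressive model.
# Higher bits/dim = lower likelihood (more likely OOD) and, for compression,
# the ideal per-dimension code length achievable with entropy coding.
import math
import torch

def bits_per_dim(model, x: torch.Tensor) -> torch.Tensor:
    """x: (B, C, H, W); `model(x)` is assumed to return the summed
    log-likelihood per image in nats, shape (B,)."""
    log_prob = model(x)
    dims = x[0].numel()
    return -log_prob / (dims * math.log(2.0))
```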
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
Early detection of relevant locations in a piece of news is especially important in extreme events such as environmental disasters, war conflicts, disease outbreaks, or political turmoil. Additionally, this detection also helps recommender systems to promote relevant news based on user locations. Note that, when the relevant locations are not mentioned explicitly in the text, state-of-the-art methods typically fail to recognize them because these methods rely on syntactic recognition. In contrast, by incorporating a knowledge base and connecting entities with their locations, our system successfully infers the relevant locations even when they are not mentioned explicitly in the text. To evaluate the effectiveness of our approach, and due to the lack of datasets in this area, we also contribute to the research community with a gold-standard multilingual news-location dataset, NewsLOC. It contains annotations of the relevant locations (and their WikiData IDs) for 600+ Wikinews articles in five different languages: English, French, German, Italian, and Spanish. Through experimental evaluations, we show that our proposed system outperforms the baselines, and that fine-tuning the model with semi-supervised data further increases the classification rate. The source code and the NewsLOC dataset are publicly available to the research community at https://github.com/vsuarezpaniagua/NewsLocation.
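A minimal sketch of the knowledge-base inference step described above: entities detected in the article are mapped to the locations stored for them in a knowledge base and aggregated by majority vote. The entity linker and the entity-to-location mapping are illustrative stand-ins, not the system's actual components.

```python
# Sketch: infer implicit news locations from linked entities and a KB lookup.
from collections import Counter

def infer_locations(entities, kb_entity_to_location, top_k=3):
    """entities: entity names found in the article;
    kb_entity_to_location: mapping from entity to its KB-stored location
    (e.g. a country or city associated with the entity)."""
    votes = Counter(kb_entity_to_location[e] for e in entities if e in kb_entity_to_location)
    return [loc for loc, _ in votes.most_common(top_k)]
```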
In recent years, multi-label, multi-class video action recognition has gained significant popularity. While reasoning over temporally connected atomic actions is mundane for intelligent species, standard artificial neural networks (ANNs) still struggle to classify them. In the real world, atomic actions often temporally connect to form more complex composite actions. The challenge lies in recognising composite actions of varying durations while other distinct composite or atomic actions occur in the background. Drawing upon the success of relational networks, we propose methods that learn to reason over the semantic concepts of objects and actions. We empirically show how ANNs benefit from pretraining, relational inductive biases and unordered set-based latent representations. In this paper, we propose deep set conditioned I3D (SCI3D), a two-stream relational network that employs a latent state representation and a visual representation for reasoning over events and actions. The streams learn to reason about temporally connected actions in order to identify all of them in the video. The proposed method achieves an improvement of around 1.49% mAP in atomic action recognition and 17.57% mAP in composite action recognition, over an I3D-NL baseline, on the CATER dataset.
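For concreteness, a relation-network-style module over an unordered set of entity features might look like the sketch below; the layer sizes and the sum aggregation are assumptions, and the surrounding two-stream I3D pipeline is not shown.

```python
# Sketch (PyTorch) of a relation-network module over an unordered set of features.
import torch
import torch.nn as nn

class RelationModule(nn.Module):
    def __init__(self, dim: int = 256, hidden: int = 512):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, objs: torch.Tensor) -> torch.Tensor:
        """objs: (B, N, dim) unordered set of object/state features."""
        B, N, D = objs.shape
        a = objs.unsqueeze(2).expand(B, N, N, D)
        b = objs.unsqueeze(1).expand(B, N, N, D)
        pair = torch.cat([a, b], dim=-1)       # all ordered pairs of set elements
        rel = self.g(pair).sum(dim=(1, 2))     # permutation-invariant aggregation
        return self.f(rel)                     # relational summary used for classification
```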
Large language models (LLMs) have demonstrated excellent zero-shot generalization to new language tasks. However, effective utilization of LLMs for zero-shot visual question answering (VQA) remains challenging, primarily due to the modality disconnection and task disconnection between the LLM and the VQA task. End-to-end training on vision and language data may bridge the disconnections, but is inflexible and computationally expensive. To address this issue, we propose \emph{Img2Prompt}, a plug-and-play module that provides prompts that bridge the aforementioned modality and task disconnections, so that LLMs can perform zero-shot VQA tasks without end-to-end training. To provide such prompts, we employ LLM-agnostic models to generate descriptions of image content and self-constructed question-answer pairs, which can effectively guide the LLM to perform zero-shot VQA tasks. Img2Prompt offers the following benefits: 1) It can flexibly work with various LLMs to perform VQA. 2)~Without the need for end-to-end training, it significantly reduces the cost of deploying LLMs for zero-shot VQA tasks. 3) It achieves comparable or better performance than methods relying on end-to-end training. For example, we outperform Flamingo~\cite{Deepmind:Flamingo2022} by 5.6\% on VQAv2. On the challenging A-OKVQA dataset, our method even outperforms few-shot methods by as much as 20\%.
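A minimal sketch of the prompt construction the abstract describes (an image description plus self-constructed question-answer pairs, followed by the target question); the caption and QA generators in the usage comment are placeholders rather than the paper's actual modules.

```python
# Sketch: turn image-derived text into a prompt for a frozen LLM.
from typing import List, Tuple

def build_vqa_prompt(caption: str, exemplar_qa: List[Tuple[str, str]], question: str) -> str:
    lines = [f"Context: {caption}"]
    for q, a in exemplar_qa:                      # self-constructed QA pairs about the image
        lines.append(f"Question: {q} Answer: {a}")
    lines.append(f"Question: {question} Answer:") # the actual VQA question
    return "\n".join(lines)

# Hypothetical usage with placeholder components:
# prompt = build_vqa_prompt(caption_model(image), qa_generator(image), "What is the man holding?")
# answer = llm.generate(prompt)
```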
As information extraction (IE) systems have grown more capable at whole-document extraction, the classic task of \emph{template filling} has seen renewed interest as a benchmark for evaluating them. In this position paper, we call into question the suitability of template filling for this purpose. We argue that the task demands definitive answers to thorny questions of \emph{event individuation} -- the problem of distinguishing distinct events -- about which even human experts disagree. We show through annotation studies and error analysis that this raises concerns about the usefulness of template filling evaluation metrics, the quality of datasets for the task, and the ability of models to learn it. Finally, we consider possible solutions.